Fears about the effects of misinformation on Indian politics seem omnipresent today. Reports suggest huge volumes of “fake news” and misleading content filling up WhatsApp groups and social media feeds, with potentially dangerous consequences. The advent of generative artificial intelligence and “deepfakes” has only made those concerns more immediate.

But how big is the misinformation problem in India? What do we know about it? And what can we do to address it?

In the sixth interview of the CASI Election Conversations 2024, CASI Consulting Editor Rohan Venkat speaks to Sumitra Badrinathan, Assistant Professor of Political Science at American University, about the state of research on the subject, a recent paper of hers that examines efforts to correct misinformation related to vigilante violence in India, and the urgent need for more scholarship examining fake news in the Global South.

Why is it important for us to be studying misinformation, particularly in an Indian context?

Studying misinformation in academia is new. A lot of literature about the topic started after Donald Trump got elected in 2016. And that’s not that long ago, which means there’s a lot we don’t know. If I had to divide misinformation research into three broad questions, I would say that we’re trying to look at:

● What are the reasons that people believe misinformation? Why are we vulnerable to it?

● What can we do about it? What are the solutions to counter it?

● What are the consequences? What happens once you believe misinformation?

Even the research that we have so far is largely concentrated in Western countries. And for the most part, it only looks at that second sub-question: what can we do to counter misinformation? We know a little bit from psychology about why people believe it, but we know next to nothing about what its consequences are.

So, what happens when you believe misinformation? Does misinformation cause you to change your vote? Does it cause you to become violent? Does it increase polarisation? These are, unfortunately, questions that we don’t really have answers to – not just in India, but anywhere in the world.

There’s a recent piece by Rob Blair and co-authors reviewing all of the experimental and other academic work on misinformation, which finds that over 80% of the studies that have been conducted on misinformation are in a Western European or American context. There are so many reasons to believe that those contexts don’t generalise to India.

The second reason is: is misinformation actually a problem in India? From a normative perspective, not having good information threatens the fabric of democracy because it deprives you of the information you need to hold your leaders accountable. So apart from being an empirical space where we just don’t know much, it is also a deeply normative issue.

You and Simon Chauchard also wrote about the need to look at the differences between Global North and Global South on this.

One of the things that struck us was that most misinformation studies look solely at online platforms. They presume that misinformation is generally an online construct that spreads on online platforms, and that solutions should be geared toward platforms in some way – like putting fact-checking tags or labels on false information, and testing whether seeing something labelled as false makes people change their perspective. Another thing that people have looked at is what researchers call “pre-bunking.” It’s the idea that we can dispel misinformation even before it reaches people.

The common thread for all these interventions seems to be that they deal with platforms, specifically with Facebook and Twitter. You and I know that those aren’t the main platforms in India. WhatsApp is the main one, and it is substantially different from these platforms because it’s encrypted. It’s almost like email: if you are the sender and I am the receiver, it’s just the two of us who can see the content of a message. This idea that the platform can slap on a tag that says that something is false or untrue is just not feasible.

But we also know that India is a very collectivistic society where we talk to our neighbours, we talk to our friends and family. A lot of information and news sharing, not just misinformation, happens in very communal spaces that are just not the context of the Global North that most research looks at.

We start from the premise that misinformation is not solely an online phenomenon – not just in India, but in many countries of the Global South with political and cultural contexts like India’s. A lot of the findings and methods that people use to study misinformation are just not feasible for contexts such as this. I can’t really take one of those studies and say, “I’m going to replicate it in a context like India.”

The second is that India is a very different context from places like the US or Western Europe in terms of people’s levels of digital literacy and familiarity with the internet. It’s also a different context in terms of politics and the kinds of relationships we have to groups in society. A lot of the literature talks about Democrats and Republicans. That kind of context doesn’t easily map onto India, where you have not just several parties, but also other cleavages that are very primordial in the way people think of their identities – religion, caste, and so on. Even theories, not just methods, don’t easily apply.

In many ways, we’ve seen very good journalistic reporting coming out of India about how platforms sometimes collude with political elites to either help or curb their interests. This seems to be a feature that doesn’t quite exist in the Western context.

In the past, researchers have had success doing what we call “in the wild” field experiments, where they’ve gotten platforms to change the way that information is presented to people. It’s a goal that we would love to achieve in field work. But I can’t think of that ever happening in a context like India, where it seems like platforms themselves are not independent – that they are, in many ways, beholden to the interests of the political or other elites in the country.

Do we have a sense of the scale of the misinformation problem in India?

It’s a really difficult question to answer empirically because WhatsApp is a platform that we can’t really get into as researchers.

There is some work by Kiran Garimella and co-authors which scrapes content from public WhatsApp groups – maintained and owned by political parties as well as by private citizens, around election issues and otherwise – to see how much of the content out there is misinformation. Those studies are rare, but they also look at a very specific subset of WhatsApp groups: public groups. We have close to no idea about what’s going on in private spaces, and I think more importantly, we have no idea what’s going on in offline spaces.

A lot of the pressures that people face online or some of the affordances they get from being anonymous play out very differently when you’re talking to people in a group offline. There’s a second issue which is that we don’t know if misinformation is, itself, a problem or whether it’s a symptom of another problem like polarisation.

This is a chicken-and-egg problem, not particular to India. It’s difficult to accurately identify the causal chain in the processes that lead us to perform certain behaviours, like voting, or other acts that attenuate democratic norms in some way, such as support for violence. This is the kind of question that would be amazing for us to answer as an academic community, but it’s also the most difficult. There’s very little proof anywhere, in any context, about what misinformation can do. And that’s a larger problem that needs to be studied.

I come at it from more of a normative perspective. In a well-functioning, healthy society, misinformation from the top shouldn’t exist, and hence, if we as academics can do something about it, we should.

From an empirical perspective, some argue this is a larger problem than A, B, C, and D, and hence that’s why we’re studying it. The World Economic Forum wrote a report recently saying that one of the largest threats to national security is misinformation, over and above a bunch of other issues that they named. We don’t really have evidence to support that kind of ranking. But from a normative perspective, I think we all agree that it’s something that we need to look at, and that’s why we study it.

Looking at the West, has there been enough time for the field to settle? Are there settled approaches to studying misinformation?

Other misinformation researchers would maybe agree, to a certain extent, that we know what works and that there are settled approaches. My opinion is that I’m not sure we can take those insights at face value when applying them to countries like India, because the contexts are so different.

But what I will say the field agrees on – and this is something that’s been shown empirically in contexts across the world, including in India, Côte d’Ivoire, Brazil, and the US – is the reason why we’re vulnerable to misinformation: a psychological reason that depends less on cultural context or socioeconomic status. And that reason is what psychologists and political scientists call motivated reasoning, which is the idea that we as human beings are motivated to reason in a certain way when we encounter information.

Because we as human beings try to protect ourselves from information that doesn’t sit right with us, that motivation means we have an inclination to accept information that aligns with our previous beliefs and to push away information that doesn’t, simply because it creates dissonance with those beliefs. This process occurs subconsciously. Citizens across the world tend to push away information that doesn’t fit with what they thought they knew. And that’s one of the reasons why we fall prey to misinformation, especially if that misinformation is aligned with our prior beliefs.

The last time I interviewed you was in 2021. We had the results of two things you’d worked on at the time. One big in-person intervention in Bihar turned up surprising results, not making a significant difference, at least on some counts. Another found that, with very little effort, you could make a dent in how likely people were to believe misinformation. Where has the field taken us in the interim?

There isn’t yet enough work to step back and say, “Here’s what we confidently know and here’s what we don’t.” That’s because this research takes a lot of time, and there aren’t that many people working on misinformation in India. From the small number of studies that have been conducted, high percentages of people say they believe certain falsehoods, especially if those falsehoods are tied to their religion in some way or to their partisan identity.

For instance, take health misinformation stories that are not solely in the realm of health but are also political, such as claims about what cow urine can do for the human body – claims that have been debunked time and again by doctors and scientists. Every time we sample a new set of Indian citizens across different states, whether online or in person, over 50% of them say that they believe those claims. That’s a pretty high number, especially compared to US samples, where even 30% of the population believing something is considered normatively high.

While that may seem scary, one thing that past research has consistently found is that it’s just as easy to get people to switch their beliefs in the short run. My take on this is that whatever processes lead people to believe misinformation, the same processes also lead them to believe corrections to misinformation. Now, whether those corrections last into the future and can withstand the onslaught of more information is something that needs to be studied. But in the short run, we’re able to change people’s beliefs. That is helpful because in the short run, people undertake a lot of activities. You may go to vote tomorrow. So, if I’m able to issue a correction to misinformation that leads you to have a better set of beliefs before you go to vote, that’s useful even if those beliefs don’t last into the future.

Fact-checking and debunking are relatively low-cost, light-touch interventions compared to the media literacy and digital literacy programmes that we like to talk about in civil society. So, maybe the solution is to just constantly fact-check and keep putting the right information out there. Though I will say that its efficacy depends not just on the correct information existing in the world, but on getting it into the hands of people – a separate issue altogether, and one that maybe doesn’t happen in India, which is why you see this contrast between fact-checkers doing their jobs and misinformation persisting.

Let’s move then to a recent paper of yours on vigilantism, which has been conditionally accepted to the American Political Science Review. What were you seeking to understand there and what did you find?

This paper was born out of, for me personally, seeing what happened in Washington, DC on January 6. Responses to that moment ranged from “we need solutions to stop misinformation because it was the big lie that led people to this behaviour” to other sections of journalists and civil society activists saying, “Well, misinformation doesn’t make you violent.” We saw a lot of violent scenes in the Capitol. How is it possible that someone who’s never done something like this in their whole life suddenly sees a false story and decides to pick up something to beat somebody else up with? That doesn’t track.

This got me and my co-authors – Simon Chauchard at Universidad Carlos III de Madrid (UC3M) and Niloufer Siddiqui at the University at Albany – thinking about the consequences of misinformation in India and Pakistan, and specifically about a particular type of violence – vigilante violence – that seeks to punish people extrajudicially – minority citizens – for so-called transgressions of norms.

We got to thinking about the role that misinformation plays in this process, and about whether people support this violence because they’re polarised, or whether it’s misinformation about the case that leads them to support the violence. Researchers in the US and other Western countries, particularly those working on vaccine misinformation, find that they’re very easily able to correct the actual misinformation by providing fact-checks in some form. They’ll say, “there’s very little evidence that vaccines directly cause autism.” And then they’ll ask people, “okay, do you think that vaccines cause autism?” And most people will respond to the fact-check in the way that you would want them to, by saying, “no, they don’t. We believe you.” But when you ask them the follow-up question – “Okay, so do you intend to get vaccinated?” – people don’t change.

This leads some academics to think that correcting misinformation corrects misinformation – and nothing more. It doesn’t do anything for your downstream beliefs, which in many ways are more important, because if it’s not the misinformation that’s moving you to act in certain ways, then what is, and what gives? That was our premise for this study. We came from a place of trying to see what leads people to support extrajudicial violence, and what can temper their support for this kind of violence.

In the past, a lot of research across the world has looked into the drivers of people’s support for vigilante violence, and a lot of this literature has talked about the relationship between the citizen and their state. This is basically saying that vigilante mobs arise when the state doesn’t fulfil its role in upholding law and order, so there is a need for vigilantes to step in and do something about it. There’s also some research showing that when vigilante groups do not fear sanction from the state, they’re more likely to take matters into their own hands.

Surprisingly, the role of information in this process is close to absent. That’s the gap we wanted to fill. We set up an experiment where we pitted all of these reasons against each other. Do people support vigilante mobs because they think the state is inadequate? Because they don’t fear punishment from the state? Or because of the actual falsehood at hand?

Consequently, if I correct the misinformation that led you to believe that this mob violence was justified, are you going to reduce your support for that mob violence? Or maybe misinformation doesn’t have anything to do with it, and it all depends on the state? That was an open question, and that’s the question we wanted to study in that paper.

What did you find? Does misinformation have a role or not?

Surprisingly – I say surprisingly because my prior was that misinformation would play a less important role compared to these other factors – we find that correcting misinformation not only corrects belief in the misinformation, but also significantly reduces people’s downstream support for vigilantism. And not in the abstract: it’s tied to specific incidents, salient in both India and Pakistan, that have captured the imagination of polarised people on either side of the political spectrum in recent years.

Correcting people – telling them that the premise on which they think this violence is justified is false and has been debunked time and again – leads them not just to believe that the initial rumour was false, but also to say, “Okay, we don’t want to support vigilante violence.” In both India and Pakistan, it leads respondents to say that they would support punishment for the vigilante group that took the law into its own hands, which is illegal. In both cases, people are opting to punish members of their own religious in-group.

This is a really important finding because time and again, research across the world has shown that we tend to put our in-group, whatever that means – Republicans or Democrats, Hindus and Muslims – above everything else. In many cases, people have run experiments showing that citizens are willing to leave monetary benefits on the table for themselves if it means they can undertake an act to benefit their in-group.

In this case, we’re seeing that really strong corrections can dispel that notion and that religiosity, partisanship or group identity doesn’t have to trump civic virtue.

What was the correction, and what are the numbers we’re talking about?

In India, we’re able to reduce people’s belief in misinformation by about 7%-10%. While that seems small, it’s actually double the effect size of a lot of existing misinformation studies. In Pakistan, it’s double that – a 17%-19% effect. And if you think of scaling this up, those numbers mean a lot in terms of absolute numbers of citizens.

As for the correction itself, we opted for an audio format, which is also not very typical of the misinformation literature. We created little audio clips that sound like radio presentations, or maybe news on TV, where we took actual journalistic reporting about specific incidents of vigilante violence that had been debunked – from Indian sources as well as some Western sources like The New York Times. We used that exact language, except we converted it into an audio format. Then we knocked on people’s doors, gave them a personal pair of over-the-ear headphones to wear, and had them listen.

An example would be people putting their headphones on and hearing the breaking-news tune that you hear before every little segment, followed by a 60- to 90-second clip about a particular incident of vigilante violence that happened in a particular district. The clip explains that a lot of reporting from a bunch of sources debunked the misinformation that prompted the vigilantism, and then goes into the details of the debunking.

They’re very short corrections, but I call them strong because there is something to be said about the power of audiovisuals, as opposed to reading a simple correction. Through the use of headphones, we ensure that people are paying attention. This is really important because I think attention is the difference between corrections to misinformation merely existing and people actually changing their beliefs: often, people simply aren’t paying attention. When you get them to, it can make a big difference.

There’s an element of the paper that tests another authority – you also use tweets from politicians.

That sub-part of the paper tests an idea that stems from American politics: that public opinion follows the leader – whatever elites say, opinion changes accordingly. In the process of doing research for this paper, we found real tweets, campaign material, and slogans from politicians in India, including the Prime Minister, telling people not to take matters into their own hands and to wait for the law to investigate the situation, as opposed to becoming vigilantes themselves or supporting vigilante violence.

We were interested in seeing, compared to all of these other factors, whether tweets from not just any politician, but Prime Minister Narendra Modi, telling people, “don’t take matters into your own hands”, would actually move them on this count. We find that that one particular tweet doesn’t do anything. It doesn’t change people’s views.

But I don’t want to put too much stock in this finding, for a bunch of reasons: the tweet is a visual cue, as compared to the audio that we were providing to people. It’s also possible that in the presence of the correction to misinformation, which was a really strong one, something like that tweet simply ceased to matter. So, there’s a lot going on before we can empirically conclude that messages from elites don’t make a difference.

On a personal note, I was slightly surprised that it didn’t make a difference. It’s one of the things that we as co-authors are discussing, and it needs a lot more work, because this kind of research doesn’t exist in the Indian context. And it opens up a broader question: politicians say things all the time. How seriously do we take them? In what contexts do we take them seriously? Are there issues where we think they’re bluffing? And if so, do we behave according to what we take to be the party line, even when it differs from politicians’ actual words? These are all empirically open questions that we’re personally very interested in looking at in the next round of this project.

You also found that in one case – violence mentioned in the context of cow or cattle smuggling – the corrections didn’t work. Could you tell us about that?

We haven’t really talked about what the vigilante incidents were. One of the categories was vigilante violence in response to cow-related or beef-related issues, which have come to occupy a very central place in political discourse, at least in certain Indian states, including Uttar Pradesh, where we did this particular study.

We picked one such incident, which had happened in the past and had been debunked by various sources, as the story for our intervention. On that particular count, we found that our correction was able to reduce belief in the misinformation in the story, but it didn’t reduce the downstream effect – meaning support for vigilantism. This is very similar to the vaccine misinformation in the US that I was talking about.

We had six stories in total in the paper. On five out of the six, we found this effect where dispelling the misinformation corrects beliefs and also has a downstream effect. On one out of six – particularly to do with vigilante violence that stems from misinformation about beef – we were not able to change people’s downstream attitudes toward the issue, though we were able to correct the misinformation itself.

This is, again, something where we need a lot more research before we can conclusively say what’s going on. But if I were to hazard a guess, I think it’s because this issue has become so salient in people’s minds, and the motivated reasoning is so strong, that almost nothing can get people to move from their initial beliefs. One takeaway might be that strong corrections need to be issued early in the process, before people have really strong and entrenched views about a particular issue, at which point it’s very hard for anything to change them.

This is just human behaviour across the world. When we’ve been hearing about a certain issue for a really long period of time – it’s in everyday conversations, it’s very salient, all the politicians are talking about it, all of the media is talking about it – what is one 90-second audio clip going to do to dispel all of those years of backlog about that case that you’ve had in your head? That doesn’t mean that no correction about cow-related crimes in India can work; it’s just that this particular one didn’t. It’s something that we hope we can shed more light on in future work.

Going into the 2024 elections in India, is it frustrating how quickly the subject is developing and shifting? Given that everyone is now moving on to talking about AI-related misinformation, does it feel like it is moving faster than you’re able to get a handle on it?

Since you brought up AI, I do want to mention that maybe sometimes there’s a disconnect between academics’ and civil society’s interpretations of the way the world works around misinformation. And this is partly our fault: academics are not very good at conveying findings to a broader audience outside of our niche workshops and conferences. But AI is one of those issues where everyone in India seems to be talking about it.

Unfortunately, there is little to no evidence about its role in the actual information space. I mentioned Kiran Garimella earlier. He has a new paper that tries to quantify the amount of AI-related stories in these public WhatsApp groups, and finds that AI-generated material makes up less than 1% of viral content. Not just any content – viral content specifically. So, it is close to zero of the total amount of content that’s out there.

What’s more important is that he goes on to identify the kinds of stories that are being generated using AI, and it’s mostly inspirational messages, a lot of devotional stuff – things that are, in many ways, not politically salient or relevant to the outcomes we care about, like voting or social cohesion. I find it hard to square the circle where everyone seems to be talking about AI, yet a lot of academics don’t see its role in society as being that important.

Which gets to another point – we haven’t really solved non-AI misinformation. So, maybe we should start there. A lot of misinformation comes from elites. It comes from campaign speeches. It comes from television. It comes from public spaces where we talk to each other. We don’t need AI to have a lot of wrong information, and that previous problem hasn’t been solved yet.

Sometimes it’s frustrating. This field of misinformation is moving at a very, very fast pace. But I do think the AI craze is an example of commentators from Western settings setting the agenda for what needs to be talked about in the Global South, where problems that preceded AI are still problems, if that makes sense.

Going into this election in India, are there things that you’ll be looking at and thinking about?

When we talk about misinformation generally, people talk about what I like to term identity-based misinformation. So, “Trump is really amazing and here are some reasons why.” Or “UNESCO voted Prime Minister Modi the best prime minister in the world, and here are some reasons why.” I call it identity-based because it taps into your liking for a particular party or a particular political leader.

But there’s another type of misinformation that gets swept under the rug because it’s, in many ways, not as sexy, and which we oftentimes see in these WhatsApp groups. I’m speaking of misinformation related to development projects in India.

We know that India is a country where the system is very clientelistic, and people depend a lot on the government for goods and services provided to them, especially people outside of urban areas. So, knowledge about local development such as drainage issues, water, roads, electricity, and so on, has always been salient.

But one thing that’s new is that we’re seeing credit-claiming in the form of misinformation about development projects. “Look at this bridge that was built five villages away. That’s really awesome. And that’s the kind of development we’ll give to you if you vote for us.” In reality, that bridge doesn’t exist, or the photo is of a bridge in a completely different state.

Obviously, the onus on citizens to verify that information is really high, because they’re not going to go and check out the bridge. In many ways, it feels like a deadlier form of misinformation compared to “the Prime Minister is awesome” or “X politician is amazing.” Identity-based misinformation can mobilise people who are already on that particular political side, but development-type misinformation has the capacity to switch votes. And that’s something we don’t have much data on. So, that is something that a bunch of us are interested in looking at going forward, though studying it systematically is not going to be easy.

The final question that I usually like to ask: do you have recommendations for those interested in the subject?

I really recommend a report – not an academic paper, but a report for USAID – that Robert Blair, Jessica Gottlieb, Laura Paler, and co-authors wrote, which summarises all of the misinformation interventions out there, close to 200 of them.

It’s super digestible because it’s written for a policy audience, not just an academic one. And it does a very good job of laying out what we think we know versus what we actually know – where there are gaps between solutions that we have optimism for but no evidence for. They cite and describe every single intervention on India that’s out there, so it’s comprehensive. They also provide a good key to understanding the differences between Global North and Global South contexts, and how the same studies have relatively more or less success in those contexts.

Second, Neelanjan Sircar has a really good piece titled “Disinformation: A new type of state-sponsored violence.” It’s always been pertinent, but it’s perhaps more pertinent now than ever as India votes. It’s a normative piece making a lot of theoretical arguments, but it’s a really convincing and moving piece on misinformation in India.

Sumitra Badrinathan is an Assistant Professor of Political Science at American University.

Rohan Venkat is the Consulting Editor for India in Transition and a CASI Spring 2024 Visiting Fellow.

The interview was first published in India in Transition, a publication of the Center for the Advanced Study of India, University of Pennsylvania.